Performance Benchmark

This document presents the results of an experiment designed to measure the latency between the moment the TimeBase server receives a message and its subsequent retrieval by the dxapi Python client. All measurements were conducted on an Amazon EC2 c6i.2xlarge instance (8 vCPU, 16 GB RAM), with the TimeBase server, the Java client loader, and the dxapi client co-located on the same instance.

The message payload size was 40 bytes (10 float values).
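
Conceptually, the consumer-side measurement amounts to comparing the wall-clock time at which the dxapi cursor returns a message with the timestamp the server assigned to it; because both processes run on the same instance, the two clocks are directly comparable. The sketch below is illustrative only, not the benchmark harness used to produce the numbers that follow: it assumes the dxapi calls shown in the standard Python samples (TickDb.createFromUrl, open, getStream, select, cursor.next/getMessage), a hypothetical stream key 'latency_test', and that the message timestamp field holds the server-assigned time in nanoseconds.

import time
import dxapi

# Connect to a TimeBase server assumed to run on localhost:8011.
db = dxapi.TickDb.createFromUrl('dxtick://localhost:8011')
db.open(True)  # read-only
try:
    stream = db.getStream('latency_test')  # hypothetical stream key
    # Subscribe from "now" (timestamp argument is in nanoseconds since epoch).
    start_ns = int(time.time() * 1000) * 1_000_000
    cursor = stream.select(start_ns, dxapi.SelectionOptions(), None, None)
    latencies_us = []
    while cursor.next():
        msg = cursor.getMessage()
        # msg.timestamp is assumed to be the server-assigned receive time in ns;
        # the difference to local wall-clock time is the end-to-end latency.
        latencies_us.append((time.time_ns() - msg.timestamp) / 1_000.0)
    cursor.close()
finally:
    db.close()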

Loading message rate: 10,000 msgs/s

Percentile : Latency (µs) : Message count
50.0% : 829.760 : 600000
75.0% : 1087.232 : 900000
90.0% : 1283.840 : 1080000
99.0% : 1511.488 : 1188000
99.9% : 1593.792 : 1198800
99.99% : 1626.368 : 1199880
99.999% : 3744.256 : 1199988
99.9999% : 5159.898 : 1199999
100% : 5161.792 : 1200001

Loading message rate: 50,000 msgs/s

Percentile : Latency (µs) : Message count
50.0% : 839.552 : 3000001
75.0% : 1089.408 : 4500002
90.0% : 1283.840 : 5400002
99.0% : 1514.112 : 5940002
99.9% : 1592.128 : 5994002
99.99% : 3822.464 : 5999402
99.999% : 13719.872 : 5999942
99.9999% : 14661.504 : 5999996
100% : 14677.888 : 6000003

Loading message rate: 100,000 msgs/s

Percentile : Latency (µs) : Message count
50.0% : 846.080 : 5000000
75.0% : 1094.912 : 7500000
90.0% : 1288.704 : 9000000
99.0% : 1520.896 : 9900000
99.9% : 1600.512 : 9990000
99.99% : 3985.760 : 9999000
99.999% : 13873.056 : 9999900
99.9999% : 14632.704 : 9999990
100% : 14643.102 : 10000000
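
Tables like the ones above can be reproduced from a list of per-message latencies with a simple sort: each row pairs a percentile with the latency value at that percentile and the number of messages at or below it. The benchmark's actual reporting tool is not specified here; the snippet below is a generic sketch of that computation, using the latencies_us list from the earlier example.

import numpy as np

PERCENTILES = (50.0, 75.0, 90.0, 99.0, 99.9, 99.99, 99.999, 99.9999, 100.0)

def print_percentile_table(latencies_us):
    """Print 'Percentile : Latency (µs) : Message count' rows from raw latencies."""
    data = np.sort(np.asarray(latencies_us, dtype=np.float64))
    print("Percentile : Latency (µs) : Message count")
    for p in PERCENTILES:
        value = np.percentile(data, p)                            # latency at this percentile
        count = int(np.searchsorted(data, value, side='right'))   # messages at or below it
        print(f"{p}% : {value:.3f} : {count}")

print_percentile_table(latencies_us)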